Variational Optimization
Abstract
We discuss a general technique that can be used to form a differentiable bound on the optima of non-differentiable or discrete objective functions. We give a unified description of these methods and consider under which circumstances the bound is concave. In particular, we consider two concrete applications of the method, namely sparse learning and support vector classification.

1 Optimization by Variational Bounding

We consider the general problem of function maximization, max_x f(x), for a vector x. When f is differentiable and x is continuous, optimization methods that use gradient information are typically preferred over non-gradient-based approaches, since they can exploit a locally optimal direction in which to search. However, when f is not differentiable or x is discrete, gradient-based approaches are not directly applicable. In that case, alternatives such as relaxation, coordinate-wise optimization and stochastic approaches are popular [1]. Our interest is to discuss another general class of methods that yield differentiable surrogate objectives for discrete x or non-differentiable f. The Variational Optimization (VO) approach is based on the simple bound

f^\ast = \max_{x \in \mathcal{C}} f(x) \;\geq\; \langle f(x) \rangle_{p(x|\theta)} \equiv E(\theta), \qquad (1)

where \langle \cdot \rangle_p denotes expectation with respect to the distribution p defined over the solution space \mathcal{C}. The parameters \theta of the distribution p(x|\theta) can then be adjusted to maximize the lower bound E(\theta). The bound can be trivially made tight provided p(x|\theta) is flexible enough to place all its mass on the optimal state x^\ast = \operatorname{argmax}_x f(x). Under mild restrictions the bound is differentiable, see section(1.1), and so provides a smooth alternative objective function (see also section(4.1) on the relation to 'smoothing' methods). The degree of smoothness (and the deviation from the original objective) increases as the dispersion of the variational distribution increases. In section(1.2) we give sufficient conditions for the variational bound to be concave. The purpose of this paper is to demonstrate the ease with which VO can be applied and to discuss its merits as a general way to construct a smooth alternative objective.

1.1 Differentiability of the variational objective

Even when f(x) is not differentiable, under weak conditions E(\theta) can be made differentiable. Writing the expectation in (1) as an integral, the gradient of E(\theta) is given by

\frac{\partial E}{\partial \theta} = \frac{\partial}{\partial \theta} \int f(x)\, p(x|\theta)\, dx = \int f(x)\, \frac{\partial}{\partial \theta} p(x|\theta)\, dx = \left\langle f(x)\, \frac{\partial}{\partial \theta} \log p(x|\theta) \right\rangle_{p(x|\theta)},

where the last equality uses \partial_\theta p = p\, \partial_\theta \log p, so that the gradient is itself an expectation under p(x|\theta) and can be estimated by sampling.
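As a concrete illustration (not taken from the paper), the following minimal Python sketch applies VO to the non-differentiable objective f(x) = -|x - 3| with a Gaussian variational distribution: the bound E(\theta) of equation (1) is maximized by stochastic gradient ascent, with the gradient of section(1.1) estimated by Monte Carlo via the log-derivative identity. The objective, learning rate, sample size and initialization are all illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Non-differentiable objective with its maximum at x = 3.
    return -np.abs(x - 3.0)

# Gaussian variational distribution p(x|theta), theta = (mu, log_sigma).
mu, log_sigma = -5.0, 1.0        # illustrative starting point
lr, n_samples = 0.05, 200        # illustrative step size and sample budget

for step in range(500):
    sigma = np.exp(log_sigma)
    x = mu + sigma * rng.standard_normal(n_samples)  # draws from p(x|theta)
    fx = f(x)
    # Score-function (log-derivative) estimator of grad E(theta), using
    #   d log p / d mu        = (x - mu) / sigma^2
    #   d log p / d log_sigma = (x - mu)^2 / sigma^2 - 1
    # A mean baseline is subtracted purely to reduce estimator variance.
    adv = fx - fx.mean()
    g_mu = np.mean(adv * (x - mu) / sigma**2)
    g_ls = np.mean(adv * ((x - mu)**2 / sigma**2 - 1.0))
    mu += lr * g_mu
    log_sigma += lr * g_ls

print(f"mu = {mu:.3f}, sigma = {np.exp(log_sigma):.3f}")  # mu -> 3, sigma shrinks

Because the scale sigma is learned along with mu, the distribution contracts around the optimum as training proceeds, illustrating the trade-off noted above: a larger dispersion gives a smoother but looser surrogate E(\theta), while a smaller one tightens the bound towards f.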
Similar resources
Vector Optimization Problems and Generalized Vector Variational-Like Inequalities
In this paper, some properties of pseudoinvex functions, defined by means of limiting subdifferential, are discussed. Furthermore, the Minty vector variational-like inequality, the Stampacchia vector variational-like inequality, and the weak formulations of these two inequalities defined by means of limiting subdifferential are studied. Moreover, some relationships between the vector vari...
Sequential Optimality Conditions and Variational Inequalities
In recent years, sequential optimality conditions have frequently been used to prove convergence of iterative methods for solving nonlinear constrained optimization problems. The sequential optimality conditions do not require any of the constraint qualifications. In this paper, we present the necessary sequential complementary approximate Karush-Kuhn-Tucker (CAKKT) condition for a point to be a solution of a ...
Optimization of Solution Regularized Long-wave Equation by Using Modified Variational Iteration Method
In this paper, a regularized long-wave equation (RLWE) is solved by using the Adomian decomposition method (ADM), the modified Adomian decomposition method (MADM), the variational iteration method (VIM), the modified variational iteration method (MVIM) and the homotopy analysis method (HAM). The approximate solution of this equation is calculated in the form of a series whose components are computed by ...
VARIATIONAL ITERATION METHOD FOR FREDHOLM INTEGRAL EQUATIONS OF THE SECOND KIND
In this paper, He's variational iteration method is applied to Fredholm integral equations of the second kind. To illustrate the ability and simplicity of the method, some examples are provided. The results reveal that the proposed method is very effective and simple, and for the first four examples leads to the exact solution.
RBF-Chebychev direct method for solving variational problems
This paper establishes a direct method for solving variational problems via a set of radial basis functions (RBFs) with Gauss-Chebyshev collocation centers. The method consists of reducing a variational problem to a mathematical programming problem. The authors use some optimization techniques to solve the reduced problem. Accuracy and stability of the multiquadric, Gaussian and inverse multiq...
An Iterative Scheme for Generalized Equilibrium, Variational Inequality and Fixed Point Problems Based on the Extragradient Method
The generalized equilibrium problem is very general and arises in different subjects: optimization problems, variational inequalities, the Nash equilibrium problem and minimax problems are all special cases of it. The purpose of this paper is to investigate the problem of approximating a common element of the set of generalized equilibrium problem, variational inequal...
Journal: CoRR
Volume: abs/1212.4507
Publication date: 2012